A. PAC Learning
Abstract
• Let X = R² with orthonormal basis (e1, e2), and consider the set of concepts defined by the area inside a right triangle ABC with two sides parallel to the axes, with the unit vector along AB equal to e1, the unit vector along AC equal to e2, and AB/AC = α for some positive real α ∈ R+. Show, using methods similar to those used in the lecture slides for axis-aligned rectangles, that this class can be (ε, δ)-PAC-learned from training data of size m ≥ (3/ε) log(3/δ). As in the case of axis-aligned rectangles, consider three regions r1, r2, r3 along the sides of the target concept, as indicated in Figure 1. Note that the triangle formed by the points A″, B″, C″ is similar to ABC (it has the same angles), since A″B″ must be parallel to AB, and similarly for the other sides.
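The tightest-fit strategy behind this exercise can be simulated numerically. The sketch below is an illustration, not part of the original exercise: it assumes α = 2, a uniform distribution on the unit square, and specific target parameters (all hypothetical choices), learns the smallest triangle of the class containing the positive examples, and estimates the hypothesis error by Monte Carlo.

```python
import random

ALPHA = 2.0  # assumed leg ratio AB/AC shared by every concept in the class

def in_triangle(p, apex, scale):
    """Membership test for the triangle with right-angle vertex `apex`,
    horizontal leg ALPHA*scale (along e1) and vertical leg scale (along e2)."""
    x, y = p[0] - apex[0], p[1] - apex[1]
    return x >= 0 and y >= 0 and x / ALPHA + y <= scale

def tightest_fit(positives):
    """Smallest triangle of the class containing all positive examples:
    the analog of the tightest-fit axis-aligned rectangle learner."""
    ax = min(x for x, _ in positives)
    ay = min(y for _, y in positives)
    s = max((x - ax) / ALPHA + (y - ay) for x, y in positives)
    return (ax, ay), s

random.seed(0)
target_apex, target_scale = (0.2, 0.2), 0.3  # hypothetical target concept

def draw():
    # the unknown distribution D: here, uniform on the unit square
    return (random.random(), random.random())

m = 2000
sample = [draw() for _ in range(m)]
positives = [p for p in sample if in_triangle(p, target_apex, target_scale)]
apex, scale = tightest_fit(positives)

# Monte Carlo estimate of the generalization error (probability of the
# symmetric difference between target and hypothesis under D)
test = [draw() for _ in range(20000)]
err = sum(in_triangle(p, target_apex, target_scale) != in_triangle(p, apex, scale)
          for p in test) / len(test)
print(f"estimated error: {err:.4f}")
```

As in the rectangle proof, the hypothesis can only err on thin strips along the three sides of the target triangle, which is why the estimated error shrinks as the sample size m grows.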
Similar Resources
Computational Learning Theory, Fall Semester 2010, Lecture 3: October 31
In this lecture we will talk about the PAC model. The PAC learning model is one of the most important and well-known learning models. PAC stands for Probably Approximately Correct: our goal is to learn a hypothesis from a hypothesis class such that, with high confidence, it will have a small error rate (approximately correct). We start the lecture with an intuitive example to explain the idea behind the PAC m...
Perceptron, Winnow, and PAC Learning
We analyze the performance of the widely studied Perceptron and Winnow algorithms for learning linear threshold functions under Valiant's probably approximately correct (PAC) model of concept learning. We show that under the uniform distribution on boolean examples, the Perceptron algorithm can efficiently PAC learn nested functions (a class of linear threshold functions known to be hard for Per...
Lower PAC bound on Upper Confidence Bound-based Q-learning with examples
Recently, there has been significant progress in understanding reinforcement learning in Markov decision processes (MDPs). We focus on improving Q-learning and analyze its sample complexity. We investigate the performance of tabular Q-learning, Approximate Q-learning, and UCB-based Q-learning. We also derive a lower PAC bound Ω((|S||A|/ε²) ln(|A|/δ)) for UCB-based Q-learning. Two tasks, Ca...
Boosting a Simple Weak Learner For Classifying Handwritten Digits
A weak PAC learner is one which takes labeled training examples and produces a classifier which can label test examples more accurately than random guessing. A strong learner (also known as a PAC learner), on the other hand, is one which takes labeled training examples and produces a classifier which can label test examples arbitrarily accurately. Schapire has constructively proved that a stron...
General Bounds on Statistical Query Learning and PAC Learning with Noise via Hypothesis Bounding
We derive general bounds on the complexity of learning in the Statistical Query model and in the PAC model with classification noise. We do so by considering the problem of boosting the accuracy of weak learning algorithms which fall within the Statistical Query model. This new model was introduced by Kearns [12] to provide a general framework for efficient PAC learning in the presence of class...
UBEV - A More Practical Algorithm for Episodic RL with Near-Optimal PAC and Regret Guarantees
Statistical performance bounds for reinforcement learning (RL) algorithms can be critical for high-stakes applications like healthcare. This paper introduces a new framework for theoretically measuring the performance of such algorithms called Uniform-PAC, which is a strengthening of the classical Probably Approximately Correct (PAC) framework. In contrast to the PAC framework, the uniform vers...
Publication date: 2010